Nonlinear Optimization in Mathematica with MathOptimizer Professional
Authors: János D. Pintér and Frank J. Kampas
Mathematica in Education and Research, Vol. 10, No. 2 (2005)
Abstract
The MathOptimizer Professional software package combines Mathematica’s modeling capabilities with an external solver suite for general nonlinear – global and local – optimization. Optimization models formulated in Mathematica are directly transferred to the solver engine; the results are seamlessly returned to the calling Mathematica document. This approach combines the advantages of a high-level modeling system with the Lipschitz Global Optimizer (LGO) algorithm system that has been in use for over a decade. We summarize the key global optimization concepts, then provide a brief review of LGO and an introduction to MathOptimizer Professional (MOP). MOP functionality is illustrated by solving small-scale, but non-trivial numerical examples, including several circle packing models.

1. Mathematica and its application perspectives in operations research

Mathematica is one of the most ambitious and sophisticated software products available today. Its capabilities and a broad range of applications are documented in The Mathematica Book [1], as well as in hundreds of books and many thousands of articles and presentations. Visit the website of Wolfram Research and the Mathematica Information Center [2] for extensive information and links.

Operations Research (OR) and Management Science (MS) apply advanced analytical modeling concepts and optimization methods to help make better decisions. The general subjects of OR/MS modeling and optimization are described in references [3-10]. The INFORMS website [11] and numerous other sites also provide topical information, with further links and documents.

Mathematica can also be put to good use in the context of OR/MS. Among Mathematica’s many capabilities, the following are particularly useful: support for advanced numerical and symbolic calculations, rapid prototyping, concise code development, model scalability, external link options (to application packages, to other software products, and to the Internet), and the possibility of integrated documentation, calculation, visualization, and so on within ‘live’ Mathematica notebooks that are fully portable across hardware platforms and operating systems. These features can be used effectively in the various stages of developing OR/MS applications. Model formulation, data analysis, solution strategy and algorithm development, numerical solution and related analysis, and project documentation can be put together from the beginning in the same Mathematica document (if the developer wishes to do so). Such notebooks can also be directly converted to TEX, html, xml, ps, and pdf file formats.

Let us note here that, although optimization modeling and solver environments – from Excel spreadsheets, through algebraic modeling languages (AIMMS, AMPL, GAMS, LINGO, MPL), to integrated scientific-technical computing systems (Mathematica, Maple, MATLAB) – may have a slower program execution speed when compared to a compiled ‘pure number crunching’ solver system, the overall development time can be massively reduced by using such systems to develop optimization applications. It is instructive to recall in this context the debate that surrounded the early development of programming languages (ALGOL, BASIC, FORTRAN, PASCAL, and so on), as opposed to machine-level assembly programming.

Within the broad category of OR/MS modeling and optimization, we see particularly strong application potential for Mathematica in nonlinear systems analysis and optimization.
Nonlinear objects, formations and processes are ubiquitous, and are studied in the natural sciences, engineering, economics and finances. Nonlinear system models may not be easy to cast into a pre-defined, ‘boxed’ formulation: instead, problem-specific modeling and code development are often needed, and using Mathematica as the model development platform can be very useful. Some interesting discussions of nonlinear system models and their applications are given in references [12-20]. The more strictly topical global optimization literature discusses nonlinear models and their applications which require a global solution approach [21-31].
2. The global optimization challenge

We shall consider the continuous global optimization (CGO) model:

(1)  min f(x)  subject to  x ∈ D ⊂ ℝⁿ;  D := {x : xl ≤ x ≤ xu; g(x) ≤ 0}.

Here x ∈ ℝⁿ is the vector of decision variables (ℝⁿ is the Euclidean real n-dimensional space); f : ℝⁿ → ℝ is the objective function. The set of feasible solutions D ⊂ ℝⁿ is defined by finite (component-wise) lower and upper bounds xl and xu, and by a finite collection of constraint functions g : ℝⁿ → ℝᵐ; the vector inequality g(x) ≤ 0 is interpreted component-wise (i.e., g_j(x) ≤ 0, j = 1, ..., m). Let us remark that the relations g(x) ≤ 0 directly cover the formally more general case g_j(x) ~ 0, where ~ symbolizes any one of the relational operators =, ≤, and ≥.

In addition, the model (1) also formally covers the more general mixed integer case. To see this, it is sufficient first to represent each finitely bounded integer variable y in binary form y = Σ_{i=0}^{imax} 2^i y_i. Here imax = imax(y) ≤ ⌈log₂ y⌉ (that is, imax(y) is found by rounding log₂ y up to the next integer). Then one can replace each of the binary variables y_i by the continuous relaxation 0 ≤ y_i ≤ 1 and the added non-convex constraint y_i (1 − y_i) ≤ 0. This note immediately implies the inherent computational complexity and resulting general numerical challenge in CGO, since all combinatorial models are formally subsumed by the model (1).

If the model functions f and g (the latter component-wise) are all convex, then by classical optimization theory local optimization approaches would suffice to find the globally optimal solution(s) in (1). For example, Newton’s method for finding the unconstrained minimum of a positive definite quadratic function, and its extensions to more general constrained optimization (by Lagrange, Cournot, Karush, Kuhn, Tucker, and others), are directly applicable in the convex case. Unfortunately, model convexity may not hold, or provably does not hold, in many practically relevant cases.

Let us emphasize that without further structural assumptions the general CGO model can lead to very difficult numerical problems (even in low dimensions such as n = 1 or 2). For instance, D could be disconnected, and even some of its component subsets could be non-convex; furthermore, f could be highly multiextremal. Consequently, model (1) may have a typically unknown number of global and local optima. There is no general algebraic characterization of global optimality without proper consideration of the entire CGO model structure. By contrast, in ‘traditional’ nonlinear programming, most exact algorithms aim at finding solutions to the Karush-Kuhn-Tucker system of (necessary, local) optimality conditions: the corresponding system of equations and inequalities in the present framework becomes another GO problem, often at least as complex as the original model. Furthermore, increasing dimensionality (n, m) could lead to exponentially increasing model complexity: with respect to this point see the related discussion by Neumaier [32].
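As a concrete illustration of the binary reformulation described above, the following is a minimal Mathematica sketch of our own (not MOP code); the bit-variable name b, and the specific bound 0 ≤ y ≤ 10, are assumptions made for the example.

(* Illustrative sketch: continuous relaxation of a bounded integer variable 0 <= y <= 10 *)
yUpper = 10;
imax = Ceiling[Log2[yUpper]];                  (* highest bit index needed; here 4 *)
bits = Array[b, imax + 1, 0];                  (* {b[0], b[1], ..., b[imax]} *)
yRelaxed = bits . (2^Range[0, imax]);          (* y = Sum of 2^i b[i] *)
bitConstraints = Flatten[
  {0 <= # <= 1, # (1 - #) <= 0} & /@ bits];    (* each product is >= 0 on [0,1], so <= 0 forces b[i] to 0 or 1 *)
rangeConstraint = yRelaxed <= yUpper;          (* keeps the relaxed variable within its original bound *)

Appending bitConstraints and rangeConstraint to a model’s constraint list, and substituting yRelaxed for y, yields a purely continuous (though non-convex) model of form (1).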
Let us also note that mere continuity assumptions (related to f and g) do not support the calculation of deterministically guaranteed lower bounds for the optimum value. To establish a valid bounding procedure, one can assume (or postulate) a Lipschitz-continuous model structure. For example, if f is Lipschitz-continuous in [xl, xu], i.e. for an arbitrary pair of points x1 and x2 we have |f(x1) − f(x2)| ≤ L ||x1 − x2||, then a single point x in [xl, xu] and the corresponding function value f(x) allow the computation of a lower bound for f over the entire box [xl, xu]. (Here we assume that L = L([xl, xu], f) ≥ 0 is a known overestimate of the, typically unknown, smallest possible Lipschitz constant of f over [xl, xu].)

Of course, we do not claim that all CGO model instances are extremely difficult. However, in numerous practically relevant cases we may not know a priori, or even after a possibly limited inspection and sampling procedure, how difficult the actual model is: one can think of nonlinear models defined by integrals, differential equations, special functions, deterministic or stochastic simulation procedures, and so on. In fact, many practically motivated models indeed have a difficult multiextremal structure [17, 19, 21-31]. Therefore, global numerical solution procedures have to cope with a broad range of CGO models. The CGO model class includes a number of well-structured, specific cases (such as, e.g., concave minimization under convex constraints), as well as far more general problems, e.g. differential convex, Lipschitz-continuous or merely continuous models. Therefore one can expect that the corresponding ‘most suitable’ solution approaches will also vary to a considerable extent. On one hand, a general optimization strategy should work for broad model classes, although its efficiency might be lower for more special instances. On the other hand, highly tailored algorithms may not work for problem classes outside of their intended scope. The next section discusses the LGO solver suite that has been developed with the objective of handling in principle ‘all’ models from the CGO model class.
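To spell out the Lipschitz bounding step mentioned above: since |f(x′) − f(x)| ≤ L ||x′ − x|| implies f(x′) ≥ f(x) − L ||x′ − x|| for every x′ in the box, the quantity f(x) − L · max_{x′ ∈ [xl, xu]} ||x′ − x|| is a valid lower bound over the whole box. A minimal Mathematica sketch of our own, assuming f takes a coordinate vector and L is a known overestimate of the Lipschitz constant:

(* Lower bound for f over the box [xl, xu] from a single sample point x0 inside the box *)
lipschitzLowerBound[f_, L_, xl_List, xu_List, x0_List] :=
  f[x0] - L Norm[MapThread[Max, {x0 - xl, xu - x0}]]
  (* per coordinate, the farthest box point lies at distance Max[x0 - xl, xu - x0] *)

(* Example: f(x) = Sin[3 x1] + x2^2 on [0,1]^2, sampled at the center; L = 4 is a rough overestimate *)
lipschitzLowerBound[Sin[3 #[[1]]] + #[[2]]^2 &, 4, {0, 0}, {1, 1}, {0.5, 0.5}]

The bound obtained this way is typically loose for a single sample; branch-and-bound schemes tighten it by partitioning the box and bounding each subset separately.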
3. The LGO solver system

In theory, we want to find all global solutions to a CGO model instance, tacitly assuming (or postulating) that the solution set X* is at most countable. The practical objective of global optimization is to find suitable numerical approximations of X*, and of the corresponding optimum value f* = f(x*) for x* ∈ X*.

There are several logical ways to classify global optimization strategies. Natural dividing lines lie between exact and heuristic methods, as well as between deterministic and stochastic algorithms. Here we will comment only on exact methods: for related expositions see references [26, 30, 31, 33, 34]. Exact deterministic algorithms, at least in theory, have a rigorous guarantee of finding at least one global solution, or even all points of X*. However, the associated computational burden may rapidly become excessive, especially for higher-dimensional models and/or for more complicated model functions. Most CGO models, specifically including combinatorial ones, are known to be NP-hard (that is, their solution can be expected to require an exhaustive search in the worst case). Therefore even the fast increase of computational power will not resolve their fundamental numerical intractability. For this reason, in higher dimensions and without special model structure, there is arguably more practical hope in applying exact stochastic algorithms, or other methods which (also) have a stochastic global search component.

This observation has played a key role in the gradual development of the LGO solver system, which is aimed at the numerical solution of a broad range of instances of the general model (1). LGO, originally referring (only) to the Lipschitz Global Optimizer solver component, has been developed and maintained since 1990, and it is discussed in more detail elsewhere. For theoretical background see reference [26]; references [27, 28, 35] discuss more recent implementations. The software review [36] discusses one of these, the MS Windows ‘look and feel’ LGO development environment. Here we provide only a concise summary of the key concepts and solver components.

LGO integrates a suite of global and local nonlinear optimization algorithms. The current LGO implementation includes the following solver modules:
• Branch-and-bound global search method (BB)
• Global adaptive random search (GARS) (single-start)
• Multi-start based global random search (MS)
• Local search by the generalized reduced gradient method (LS)

All component solvers are implemented derivative-free: this is of particular relevance with respect to applications in which higher-order (gradient, Hessian, ...) information is impossible, difficult, or costly to obtain. This feature makes LGO applicable to a broad range of ‘black box’ (complicated, possibly confidential, and so on) models, as long as they are defined by continuous model functions over finite n-dimensional intervals. The global search methodology and convergence results underpinning BB, GARS, and MS are discussed in [26, 37], with extensive further pointers to the relevant literature. All three global scope algorithms are automatically followed by the local search algorithm. The generalized reduced gradient method is discussed in numerous textbooks with an emphasis on local optimization [8]. Therefore only a concise review of the proprietary LGO solvers is included below, with notes related to their implementation.

The BB solver component implements a theoretically convergent global optimization approach for Lipschitz-continuous functions f and g. BB combines set partition steps with deterministic and randomized sampling within subsets. The latter strategy supports a statistical bounding procedure that, however, is not rigorous in a deterministic sense (since the Lipschitz information is not known in most cases). The BB solver module will typically generate a close approximation of the global optimizer point(s) before LGO switches over to local search. Note that LGO runtimes can be expected to grow in higher-dimensional and more difficult models, if we want to find a close approximation of the global solution by BB alone. (Let us remark that a similar comment applies to all other theoretically exact, implementable global search strategies.)

Pure random search is a simple ‘folklore’ approach that converges to the global solution (set) with probability one, under mild analytical conditions: this includes models with merely continuous functions f and g, without the guaranteed Lipschitz-continuity assumption.
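As a schematic illustration of pure random search (our own minimal Mathematica sketch, not LGO’s implementation; constraints and penalty handling are omitted), the following samples the box uniformly and returns the best point found. With probability one, the returned value approaches the global minimum as the sample size grows:

(* Schematic pure random search over the box [xl, xu]; f takes a coordinate vector *)
pureRandomSearch[f_, xl_List, xu_List, n_: 10000] :=
  First @ SortBy[
    {f[#], #} & /@ Table[RandomReal /@ Transpose[{xl, xu}], {n}],
    First]  (* returns {best value, best point} *)

(* Example: a multiextremal objective on [0,1]^2 *)
pureRandomSearch[Sin[12 #[[1]]] + #[[2]]^2 &, {0, 0}, {1, 1}]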
The GARS solver mode is based on pure random search, but it adaptively attempts to focus the global search effort on the region which, on the basis of the actual sample results, is estimated to contain the global solution point (or, in general, one of these points). Similarly to BB, this method generates an initial solution for subsequent local search.

Multi-start based global search applies a similar search strategy to single-start; however, the total sampling effort is distributed among several global searches. Each of these leads to a ‘promising’ starting point for subsequent local search. This strategy can be recommended in the presence of a possibly large number of competitive local optima. Typically, MS requires the most computational effort (due to its multiple local searches); however, in complicated models, it often finds the best numerical solution.

Note that all three global scope methods search over the entire range xl ≤ x ≤ xu, while jointly considering the objective function f and the constraints g. This is done by internally introducing an aggregated exact l2-penalty function defined as:
(2)  f(x) + Σ_{j∈E} |g_j(x)|² + Σ_{j∈I} max(g_j(x), 0)²

Here the index sets E and I of the summation signs denote the sets of equality and inequality constraints, respectively. The numerical viability of this approach is based on the tacit assumption that the model is at least ‘acceptably’ scaled. A constraint penalty parameter p > 0 can be adjusted in the LGO solver options file: this value is used to multiply the constraint functions, to give more or less emphasis to the constraints.

Ideally, and also in many practical cases, the three global search methods outlined above will give the same answer, except for small numerical differences due to rounding errors. In practice, when trying to solve complicated problems with a necessarily limited computational effort, the LGO user may wish to try all three global search options, and then compare the results obtained.

The local search mode, following the standard convex nonlinear optimization paradigm, starts from the given (user-supplied, or global search based) initial solution, and then performs a local search. As mentioned above, this search phase is currently based on the generalized reduced gradient algorithm. A key advantage of the GRG method is that once it attains feasibility, it will maintain it (barring numerical errors and other unforeseen problems). Therefore its use can also be recommended as a stand-alone dense nonlinear solver. The application of the local search mode typically results in an improved solution that is at least feasible and locally optimal.

The solver suite approach implemented in LGO supports the numerical solution of both convex and non-convex models under mild (continuity) assumptions. LGO even works in cases when some of the model functions are discontinuous at certain points over the box region [xl, xu]. The MathOptimizer Professional User Guide [38] provides further details, including modeling and solver tuning tips. Without going into details unnecessary in the present context, let us remark that LGO is currently available for numerous professional C and Fortran compiler platforms, and as a solver engine with links to Excel, GAMS, MPL, Maple, Mathematica and TOMLAB (MATLAB). The MPL/LGO solver, in a demo version, now accompanies the well-received textbook [9].
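For illustration, the merit function (2) is easy to express in Mathematica. This is our own minimal sketch, not LGO internals: the name mergedMerit is hypothetical, and applying the multiplier p to the aggregated constraint terms (rather than to the individual constraint functions, as described above) is a simplification.

(* Aggregated exact l2-penalty, cf. (2); eqs lists the g_j = 0 terms, ineqs the g_j <= 0 terms *)
mergedMerit[obj_, eqs_List, ineqs_List, p_: 1] :=
  obj + p (Total[eqs^2] + Total[(Max[#, 0]^2 &) /@ ineqs])

(* Example: minimize x + y subject to x^2 + y^2 = 1 and x - y <= 0, with p = 10 *)
mergedMerit[x + y, {x^2 + y^2 - 1}, {x - y}, 10]

Minimizing this single unconstrained expression over the box then serves as a stand-in for the constrained problem, which is exactly why acceptable scaling of the constraint functions matters.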
4. MathOptimizer Professional

MathOptimizer Professional (MOP) is a third-party software package [38] for global and local nonlinear optimization. MOP can be used with recent Mathematica versions (4 and higher): it is currently set up for Windows platforms, but other implementations can also be made available. In our illustrative tests discussed here, we use Mathematica 5.0 and 5.1, and P4 machines running under Windows XP Professional.

MathOptimizer Professional combines Mathematica’s modeling capabilities with the LGO solver suite: optimization models formulated in Mathematica are directly sent (via MathLink) to LGO; the optimization results are returned to the calling Mathematica notebook. The overall functionality of MOP can be summarized by the following steps:

• formulation of the constrained optimization model in a corresponding Mathematica notebook,
• translation of the Mathematica model into C or Fortran code,
• compilation of the model code into a Dynamic Link Library (DLL); this step obviously requires a suitable compiler,
• call to the external LGO engine, an executable program that is linked together with the model DLL,
• model solution and report generation by LGO, and
• LGO report display within the calling Mathematica notebook.

Note that the steps above are fully automatic, except (of course) the model formulation in Mathematica. The approach outlined supports (only) the solution of models defined by Mathematica functions that can be directly converted into C and Fortran program code. However, this still allows the handling of a broad range of continuous nonlinear optimization models (including, of course, all such models that could be described in C and Fortran). Let us also remark that a ‘side benefit’ of using MOP is its automatic C or Fortran code generation feature: this can be put to good use, e.g., in generating test model libraries, or in application development.

We have tested MathOptimizer Professional in conjunction with various C and Fortran compilers, including Borland C/C++ version 5; Lahey Fortran 90 and 95; Microsoft Visual C/C++ version 6.0; and Salford Fortran FTN77 and FTN95. Note that, except for some (internal) naming convention differences, all MOP functionality is compiler-independent; versions for other compiler platforms can easily be made available upon request.

MathOptimizer Professional can be launched by the following Mathematica command:

Needs["MathOptimizerPro`callLGO`"];

Upon execution of the above statement, a command window opens for the MathLink connection to a system call option. (One could trace the compiler and linker messages in this window; however, in case of errors the corresponding messages also appear in the Mathematica environment.) The basic functionality of callLGO can be queried by the following Mathematica statement: see the auto-generated reply immediately below.

?callLGO

callLGO[obj_, cons_List, varswithbounds_List, opts___]: obj is the objective function, cons is a list of the constraints, varswithbounds are the variables and their bounds in the format {{variable, lower bound, initial value for local search, upper bound}...} or {{variable, lower bound, upper bound}...}. Function return is the value of the objective function, a list of rules giving the solution, and the maximum constraint violation. See Options[callLGO] for the options and also see the usage statements of the various options for their possible values.
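To make the calling convention concrete, here is a small hypothetical invocation of our own, following the argument format in the usage message above (the model itself is just an illustration):

(* Hypothetical example: a multiextremal objective with one inequality constraint *)
callLGO[(x - 1)^2 + Sin[5 x] + y^2,     (* objective function *)
  {x + y <= 1},                         (* list of constraints *)
  {{x, -2, 2}, {y, -2, 2}}]             (* variables with lower and upper bounds *)

(* Return format, per the usage message:
   {objective value, {x -> ..., y -> ...}, maximum constraint violation} *)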
For example, enter ?Method for the possible settings of the Method option.

Options[callLGO]

{ShowSummary -> False, Method -> MSLS, MaxEvals -> ProblemDetermined, MaxNoImprov -> ProblemDetermined, PenaltyMult -> 1, ModelName -> "LGO Model", DllCompiler -> VC, ShowLGOInputs -> False, LGORandomSeed -> 0, TimeLimit -> 300, TOBJFGL -> -1000000, TOBJFL -> -1000000, MFPI -> 1.*^-6, CVT -> 1.*^-6, KTT -> 1.*^-6}

The list below briefly explains the current callLGO options. All options can be changed by the user, as per the given specifications.

ShowSummary (default False): do not display (or display) the LGO report file (LGO.SUM).
Method (default MSLS): multi-start global search (MS) followed by local search (LS). Alternative choices are BBLS, GARSLS, and LS.
MaxEvals (default ProblemDetermined): allocated global search effort, set by default to 1000 (n + m) (global search phase stopping criterion).
MaxNoImprov (default ProblemDetermined): allocated global search effort without improvement, set by default to 200 (n + m) (global search phase stopping criterion).
PenaltyMult (default 1): penalty multiplier.
ModelName (default "LGO Model"): model-dependent (default) name.
DllCompiler (default VC): supported compilers: Borland / Lahey / Microsoft / Salford.
ShowLGOInputs (default False): do not display (or display) the generated LGO input files.
LGORandomSeed (default 0): set to generate randomized search points.
TimeLimit (default 300): maximal allowed runtime, in seconds (global search phase stopping criterion).
TOBJFGL (default -1000000): target objective function value in the global search phase (global search phase stopping criterion).
TOBJFL (default -1000000): target objective function value in the local search phase (stopping criterion).
MFPI (default 10^-6): merit function precision improvement tolerance (local search phase stopping criterion).
CVT (default 10^-6): accepted constraint violation tolerance (local search phase stopping criterion).
KTT (default 10^-6): Kuhn-Tucker condition tolerance (local search phase stopping criterion).
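As a usage illustration, the earlier hypothetical model can be re-run with non-default options; the option names and the Method values are taken from the list above, while the particular numeric settings are our own assumptions:

(* Hypothetical: branch-and-bound plus local search, more emphasis on constraints,
   a one-minute time limit, and the LGO summary report displayed *)
callLGO[(x - 1)^2 + Sin[5 x] + y^2,
  {x + y <= 1},
  {{x, -2, 2}, {y, -2, 2}},
  Method -> BBLS, PenaltyMult -> 10,
  TimeLimit -> 60, ShowSummary -> True]

Comparing runs under Method -> BBLS, GARSLS, and MSLS, as suggested in Section 3, is often a practical way to gain confidence in the solution of a complicated model.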